Results 1 - 20 of 59
1.
Behav Res Methods ; 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38528247

ABSTRACT

Questionnaires are ever-present in survey research. In this study, we examined whether an indirect indicator of general cognitive ability could be developed based on response patterns in questionnaires. We drew on two established phenomena characterizing connections between cognitive ability and people's performance on basic cognitive tasks, and examined whether they apply to questionnaire responses. (1) The worst performance rule (WPR) states that people's worst performance on multiple sequential tasks is more indicative of their cognitive ability than their average or best performance. (2) The task complexity hypothesis (TCH) suggests that relationships between cognitive ability and performance increase with task complexity. We conceptualized items of a questionnaire as a series of cognitively demanding tasks. A graded response model was used to estimate respondents' performance for each item based on the difference between the observed and model-predicted response ("response error" scores). Analyzing data from 102 items (21 questionnaires) collected from a large-scale nationally representative sample of people aged 50+ years, we found robust associations of cognitive ability with a person's largest but not with their smallest response error scores (supporting the WPR), and stronger associations of cognitive ability with response errors for more complex than for less complex questions (supporting the TCH). Results replicated across two independent samples and six assessment waves. A latent variable of response errors estimated for the most complex items correlated .50 with a latent cognitive ability factor, suggesting that response patterns can be utilized to extract a rough indicator of general cognitive ability in survey research.
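The worst/best performance logic of this abstract can be sketched in a few lines. The error scores, toy data, and the flat stand-in for the model-predicted responses below are illustrative only; the paper derives predictions from a graded response model, which is not reproduced here.

```python
import numpy as np

def response_error_scores(observed, predicted):
    """Absolute difference between observed and model-predicted
    responses, one score per person x item (illustrative stand-in
    for the paper's graded-response-model 'response error')."""
    return np.abs(np.asarray(observed, float) - np.asarray(predicted, float))

def worst_best_scores(errors):
    """Per-person largest ('worst') and smallest ('best') response
    errors across items, as used to test the worst performance rule."""
    return errors.max(axis=1), errors.min(axis=1)

# toy data: 4 respondents x 5 items on a 5-point scale
observed = np.array([[3, 4, 2, 5, 3],
                     [1, 1, 2, 1, 1],
                     [5, 2, 4, 3, 5],
                     [3, 3, 3, 3, 3]])
predicted = np.full((4, 5), 3.0)  # hypothetical model-implied responses
errors = response_error_scores(observed, predicted)
worst, best = worst_best_scores(errors)
```

Under the WPR, `worst` (not `best`) would be the stronger correlate of a cognitive ability score.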

2.
Article in English | MEDLINE | ID: mdl-38460115

ABSTRACT

OBJECTIVES: Self-reported survey data are essential for monitoring the health and well-being of the population as it ages. For studies of aging to provide precise and unbiased results, it is necessary that the self-reported information meets high psychometric standards. In this study, we examined whether the quality of survey responses in panel studies of aging depends on respondents' cognitive abilities. METHODS: Over 17 million survey responses from 157,844 participants aged 50 years and older in 10 epidemiological studies of aging were analyzed. We derived 6 common statistical indicators of response quality from each participant's data and estimated the correlations with participants' cognitive test scores at each study wave. Effect sizes (correlations) were synthesized across studies, cognitive tests, and waves using individual participant data meta-analysis methods. RESULTS: Respondents with lower cognitive scores showed significantly more missing item responses (overall effect size ρ^ = -0.144), random measurement error (ρ^ = -0.192), Guttman errors (ρ^ = -0.233), multivariate outliers (ρ^ = -0.254), and acquiescent responses (ρ^ = -0.078); the overall effect for extreme responses (ρ^ = -0.045) was not significant. Effect sizes were consistent across studies, modes of survey administration, and different cognitive functioning domains, although some cognitive domain specificity was also observed. DISCUSSION: Lower-quality responses among respondents with lower cognitive abilities add random and systematic errors to survey measures, reducing the reliability, validity, and reproducibility of survey study results in aging research.
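The effect-size synthesis step can be illustrated with Fisher-z pooling of correlations. This fixed-effect sketch with invented study values is a simplified stand-in for the individual participant data meta-analysis the paper actually used:

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of correlations via Fisher's z transform,
    weighting each study by n - 3 (the approximate inverse sampling
    variance of z). A simplified stand-in for IPD meta-analysis."""
    zs = [math.atanh(r) for r in rs]          # r -> z
    ws = [n - 3 for n in ns]                  # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)                   # z -> r back-transform

# hypothetical studies reporting r between cognition and response quality
rho = pool_correlations([-0.20, -0.15, -0.25], [500, 800, 300])
```

The pooled estimate lands between the study-level correlations, pulled toward the largest study.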


Subjects
Aging, Cognition, Humans, Middle Aged, Aged, Reproducibility of Results, Aging/psychology, Surveys and Questionnaires, Cognition/physiology, Epidemiologic Studies
3.
BMJ Open ; 14(3): e079241, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38453191

ABSTRACT

OBJECTIVES: This paper examined the magnitude of differences in performance across domains of cognitive functioning between participants who attrited from studies and those who did not, using data from longitudinal ageing studies where multiple cognitive tests were administered. DESIGN: Individual participant data meta-analysis. PARTICIPANTS: Data are from 10 epidemiological longitudinal studies on ageing (total n=209 518) from several Western countries (UK, USA, Mexico, etc). Each study had multiple waves of data (range of 2-17 waves), with multiple cognitive tests administered at each wave (range of 4-17 tests). Only waves with cognitive tests and information on participant dropout at the immediate next wave for adults aged 50 years or older were used in the meta-analysis. MEASURES: For each pair of consecutive study waves, we compared the difference in cognitive scores (Cohen's d) between participants who dropped out at the next study wave and those who remained. Note that our operationalisation of dropout was inclusive of all causes (eg, mortality). The proportion of participant dropout at each wave was also computed. RESULTS: The average proportion of dropouts between consecutive study waves was 0.26 (0.18 to 0.34). People who attrited were found to have significantly lower levels of cognitive functioning in all domains (at the wave 2-3 years before attrition) compared with those who did not attrit, with small-to-medium effect sizes (overall d=0.37 (0.30 to 0.43)). CONCLUSIONS: Older adults who attrited from longitudinal ageing studies had lower cognitive functioning (assessed at the timepoint before attrition) across all domains as compared with individuals who remained. Cognitive functioning differences may contribute to selection bias in longitudinal ageing studies, impeding accurate conclusions in developmental research. In addition, examining the functional capabilities of attriters may be valuable for determining whether attriters experience functional limitations requiring healthcare attention.
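The core comparison, a standardized mean difference between dropouts and remainers, can be sketched as follows. The pooled-SD formula is the standard Cohen's d; the scores are invented toy data, not values from the studies:

```python
import math

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation: the effect size
    used to compare cognitive scores of participants who dropped out
    at the next wave vs. those who remained."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / sp

# hypothetical scores on one cognitive test at the wave before attrition
remained = [12, 14, 13, 15, 14, 13]
dropped = [11, 12, 12, 13, 11, 12]
d = cohens_d(remained, dropped)
```

A positive d indicates higher scores among those who remained, matching the direction reported in the abstract.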


Subjects
Aging, Cognition, Aged, Humans, Attention, Longitudinal Studies, Research Design, Middle Aged
4.
Behav Res Methods ; 56(2): 765-783, 2024 Feb.
Article in English | MEDLINE | ID: mdl-36840916

ABSTRACT

Interest in just-in-time adaptive interventions (JITAI) has rapidly increased in recent years. One core challenge for JITAI is the efficient and precise measurement of tailoring variables that are used to inform the timing of momentary intervention delivery. Ecological momentary assessment (EMA) is often used for this purpose, even though EMA in its traditional form was not designed specifically to facilitate momentary interventions. In this article, we introduce just-in-time adaptive EMA (JITA-EMA) as a strategy to reduce participant response burden and decrease measurement error when EMA is used as a tailoring variable in JITAI. JITA-EMA builds on computerized adaptive testing methods developed for purposes of classification (computerized classification testing, CCT), and applies them to the classification of momentary states within individuals. The goal of JITA-EMA is to administer a small and informative selection of EMA questions needed to accurately classify an individual's current state at each measurement occasion. After illustrating the basic components of JITA-EMA (adaptively choosing the initial and subsequent items to administer, adaptively stopping item administration, accommodating dynamically tailored classification cutoffs), we present two simulation studies that explored the performance of JITA-EMA, using the example of momentary fatigue states. Compared with conventional EMA item selection methods that administered a fixed set of questions at each moment, JITA-EMA yielded more accurate momentary classification with fewer questions administered. Our results suggest that JITA-EMA has the potential to enhance some approaches to mobile health interventions by facilitating efficient and precise identification of momentary states that may inform intervention tailoring.
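The adaptive stopping component of classification testing can be illustrated with a sequential probability ratio test, one common stopping rule in CCT. The decision bounds and the item-level log likelihood ratios below are hypothetical, not the paper's implementation:

```python
import math

def sprt_classify(log_lrs, a=0.05, b=0.05):
    """SPRT sketch of adaptive stopping in computerized classification
    testing: keep administering items until the cumulative log
    likelihood ratio crosses a decision bound (error rates a, b)."""
    upper = math.log((1 - b) / a)   # classify above the cutoff
    lower = math.log(b / (1 - a))   # classify below the cutoff
    total = 0.0
    for i, llr in enumerate(log_lrs, start=1):
        total += llr
        if total >= upper:
            return "above cutoff", i
        if total <= lower:
            return "below cutoff", i
    return "undecided", len(log_lrs)

# hypothetical item-level log likelihood ratios favoring the
# 'fatigued' momentary state
state, n_items = sprt_classify([1.2, 0.8, 1.1, 0.9, 1.3])
```

Here a decision is reached after three items, mirroring JITA-EMA's goal of classifying momentary states with fewer questions than a fixed-length survey.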


Subjects
Ecological Momentary Assessment, Research Design, Humans, Fatigue, Computer Simulation
5.
Field methods ; 35(2): 87-99, 2023 May.
Article in English | MEDLINE | ID: mdl-37799827

ABSTRACT

Researchers have become increasingly interested in response times to survey items as a measure of cognitive effort. We used machine learning to develop a prediction model of response times based on 41 attributes of survey items (e.g., question length, response format, linguistic features) collected in a large, general population sample. The developed algorithm can be used to derive reference values for expected response times for most commonly used survey items.
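A minimal stand-in for such a prediction model is an ordinary least-squares fit of response time on item attributes. The two features and the toy items below are hypothetical; the paper's actual algorithm and its 41 attributes are not reproduced here:

```python
import numpy as np

def fit_rt_model(X, y):
    """Least-squares fit predicting response time from item attributes
    (e.g., question length, response format). A simple stand-in for
    the paper's machine-learning model."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict_rt(beta, x):
    """Expected response time for a new item with attribute vector x."""
    return beta[0] + np.dot(beta[1:], x)

# hypothetical items: [question length in words, 1 if open-ended format]
X = np.array([[10, 0], [20, 0], [15, 1], [30, 1]], float)
y = np.array([2.0, 3.0, 3.5, 5.0])   # observed response times (seconds)
beta = fit_rt_model(X, y)
pred = predict_rt(beta, [25, 0])     # reference value for a new item
```

The fitted model then supplies reference response times against which unusually fast or slow responding can be flagged.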

6.
Psychosom Med ; 85(7): 627-638, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37363989

ABSTRACT

OBJECTIVE: Seminal advances in virtual human (VH) technology have introduced highly interactive, computer-animated VH interviewers. Their utility for aiding in chronic pain care is unknown. We developed three interactive telehealth VH interviews-a standard pain-focused, a psychosocial risk factor, and a pain psychology and neuroscience educational interview. We then conducted a preliminary investigation of their feasibility, acceptability, and efficacy. We also experimentally compared a human and a computer-generated VH voice. METHODS: Patients (N = 94, age = 22-78 years) with chronic musculoskeletal pain were randomly assigned to the standard (n = 31), psychosocial (n = 34), or educational (n = 29) VH interview and one of the two VH voices. Acceptability ratings included patient satisfaction and expectations/evaluations of the VH interview. Outcomes assessed at baseline and about 1 month postinterview were pain intensity, interference, emotional distress, pain catastrophizing, and readiness for pain self-management. Linear mixed-effects models were used to test between- and within-condition effects. RESULTS: Acceptability ratings showed that satisfaction with the VH and telehealth format was generally high, with no condition differences. Study attrition was low (n = 5). Intent-to-treat analyses showed that, compared with the standard interview, the psychosocial interview yielded a significantly greater reduction in pain interference (p = .049, d = 0.43) and a marginally greater reduction in pain intensity (p = .054, d = 0.36), whereas the educational interview led to a marginally greater yet nonsignificant increase in readiness for change (p = .095, d = 0.24), as well as several significant within-condition improvements. Results did not differ by VH voice. CONCLUSIONS: Interactive VH interviewers hold promise for improving chronic pain care, including probing for psychosocial risk factors and providing pain-related education.


Subjects
Chronic Pain, Humans, Young Adult, Adult, Middle Aged, Aged, Chronic Pain/therapy, Chronic Pain/psychology, Feasibility Studies, Pilot Projects, Patient Satisfaction, Catastrophization
7.
JMIR Mhealth Uhealth ; 11: e45203, 2023 05 30.
Article in English | MEDLINE | ID: mdl-37252787

ABSTRACT

BACKGROUND: Various populations with chronic conditions are at risk for decreased cognitive performance, making assessment of their cognition important. Formal mobile cognitive assessments measure cognitive performance with greater ecological validity than traditional laboratory-based testing but add to participant task demands. Given that responding to a survey is considered a cognitively demanding task itself, information that is passively collected as a by-product of ecological momentary assessment (EMA) may be a means through which people's cognitive performance in their natural environment can be estimated when formal ambulatory cognitive assessment is not feasible. We specifically examined whether the item response times (RTs) to EMA questions (eg, mood) can serve as approximations of cognitive processing speed. OBJECTIVE: This study aims to investigate whether the RTs from noncognitive EMA surveys can serve as approximate indicators of between-person (BP) differences and momentary within-person (WP) variability in cognitive processing speed. METHODS: Data from a 2-week EMA study investigating the relationships among glucose, emotion, and functioning in adults with type 1 diabetes were analyzed. Validated mobile cognitive tests assessing processing speed (Symbol Search task) and sustained attention (Go-No Go task) were administered together with noncognitive EMA surveys 5 to 6 times per day via smartphones. Multilevel modeling was used to examine the reliability of EMA RTs, their convergent validity with the Symbol Search task, and their divergent validity with the Go-No Go task. Other tests of the validity of EMA RTs included the examination of their associations with age, depression, fatigue, and the time of day. RESULTS: Overall, in BP analyses, evidence was found supporting the reliability and convergent validity of EMA question RTs from even a single repeatedly administered EMA item as a measure of average processing speed. BP correlations between the Symbol Search task and EMA RTs ranged from 0.43 to 0.58 (P<.001). EMA RTs had significant BP associations with age (P<.001), as expected, but not with depression (P=.20) or average fatigue (P=.18). In WP analyses, the RTs to 16 slider items and all 22 EMA items (including the 16 slider items) had acceptable (>0.70) WP reliability. After correcting for unreliability in multilevel models, EMA RTs from most combinations of items showed moderate WP correlations with the Symbol Search task (ranged from 0.29 to 0.58; P<.001) and demonstrated theoretically expected relationships with momentary fatigue and the time of day. The associations between EMA RTs and the Symbol Search task were greater than those between EMA RTs and the Go-No Go task at both the BP and WP levels, providing evidence of divergent validity. CONCLUSIONS: Assessing the RTs to EMA items (eg, mood) may be a method of approximating people's average levels of and momentary fluctuations in processing speed without adding tasks beyond the survey questions.
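The unreliability correction mentioned in the results can be illustrated with the classic correction for attenuation, a simplification of the multilevel correction the study used. All values below are hypothetical:

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Correction for attenuation: the correlation expected if both
    measures were perfectly reliable. A textbook simplification of
    the multilevel unreliability correction in the abstract."""
    return r_obs / math.sqrt(rel_x * rel_y)

# hypothetical observed WP correlation and reliabilities
r = disattenuate(0.40, 0.70, 0.80)
```

Correcting an observed correlation of .40 with reliabilities of .70 and .80 raises the estimate to roughly .53, illustrating why correlations grow after the correction.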


Subjects
Ecological Momentary Assessment, Processing Speed, Adult, Humans, Reaction Time, Reproducibility of Results, Longitudinal Studies, Surveys and Questionnaires, Fatigue
8.
PLoS One ; 18(3): e0282591, 2023.
Article in English | MEDLINE | ID: mdl-36893179

ABSTRACT

Although the potential for participant selection bias is readily acknowledged in the momentary data collection literature, very little is known about uptake rates in these studies or about differences between the people who participate and those who do not. This study analyzed data from an existing Internet panel of older people (age 50 and older) who were invited to participate in a momentary study (n = 3,169), which made it possible to compute uptake and to compare many characteristics by participation status. Momentary studies present participants with brief surveys multiple times a day over several days; these surveys ask about immediate or recent experiences. A 29.1% uptake rate was observed when all respondents were considered, whereas a 39.2% uptake rate was found when individuals who did not have eligible smartphones (necessary for ambulatory data collection) were eliminated from the analyses. Taking into account the participation rate for being in this Internet panel, we estimate uptake rates for the general population to be about 5%. A consistent pattern of differences emerged between those who accepted the invitation to participate versus those who did not (in univariate analyses): participants were more likely to be female, younger, have higher income, have higher levels of education, rate their health as better, be employed, not be retired, not be disabled, have better self-rated computer skills, and to have participated in more prior Internet surveys (all p < .0026). Many variables were not associated with uptake including race, big five personality scores, and subjective well-being. For several of the predictors, the magnitude of the effects on uptake was substantial. These results indicate the possibility that, depending upon the associations being investigated, person selection bias could be present in momentary data collection studies.
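The back-of-envelope population estimate multiplies the panel participation rate by the study uptake rate. The panel rate below is a hypothetical value (the abstract does not state it) chosen to illustrate how a figure of about 5% arises from a 29.1% study uptake:

```python
def population_uptake(panel_rate, study_uptake):
    """Rough general-population uptake: probability of being in the
    Internet panel times probability of accepting the EMA study."""
    return panel_rate * study_uptake

# 0.291 is the reported study uptake; 0.17 is a hypothetical
# panel participation rate used only for illustration
est = population_uptake(0.17, 0.291)
```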


Subjects
Ecological Momentary Assessment, Research Design, Humans, Female, Aged, Middle Aged, Male, Selection Bias, Surveys and Questionnaires, Smartphone
9.
JMIR Res Protoc ; 12: e44627, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36809337

ABSTRACT

BACKGROUND: Accumulating evidence shows that subtle alterations in daily functioning are among the earliest and strongest signals that predict cognitive decline and dementia. A survey is a small slice of everyday functioning; nevertheless, completing a survey is a complex and cognitively demanding task that requires attention, working memory, executive functioning, and short- and long-term memory. Examining older people's survey response behaviors, which focus on how respondents complete surveys irrespective of the content being sought by the questions, may represent a valuable but often neglected resource that can be leveraged to develop behavior-based early markers of cognitive decline and dementia that are cost-effective, unobtrusive, and scalable for use in large population samples. OBJECTIVE: This paper describes the protocol of a multiyear research project funded by the US National Institute on Aging to develop early markers of cognitive decline and dementia derived from survey response behaviors at older ages. METHODS: Two types of indices summarizing different aspects of older adults' survey response behaviors are created. Indices of subtle reporting mistakes are derived from questionnaire answer patterns in a number of population-based longitudinal aging studies. In parallel, para-data indices are generated from computer use behaviors recorded on the backend server of a large web-based panel study known as the Understanding America Study (UAS). In-depth examinations of the properties of the created questionnaire answer pattern and para-data indices will be conducted for the purpose of evaluating their concurrent validity, sensitivity to change, and predictive validity. We will synthesize the indices using individual participant data meta-analysis and conduct feature selection to identify the optimal combination of indices for predicting cognitive decline and dementia. RESULTS: As of October 2022, we have identified 15 longitudinal aging studies as eligible data sources for creating questionnaire answer pattern indices and obtained para-data from 15 UAS surveys that were fielded from mid-2014 to 2015. A total of 20 questionnaire answer pattern indices and 20 para-data indices have also been identified. We have conducted a preliminary investigation to test the utility of the questionnaire answer patterns and para-data indices for the prediction of cognitive decline and dementia. These early results are based on only a subset of indices but are suggestive of the findings that we anticipate will emerge from the planned analyses of multiple behavioral indices derived from many diverse studies. CONCLUSIONS: Survey response behaviors are a relatively inexpensive data source, but they are seldom used directly for epidemiological research on cognitive impairment at older ages. This study is anticipated to develop an innovative yet unconventional approach that may complement existing approaches aimed at the early detection of cognitive decline and dementia. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/44627.

10.
Behav Res Methods ; 55(7): 3872-3891, 2023 10.
Article in English | MEDLINE | ID: mdl-36261649

ABSTRACT

Psychology has witnessed a dramatic increase in the use of intensive longitudinal data (ILD) to study within-person processes, accompanied by a growing number of indices used to capture individual differences in within-person dynamics (WPD). The reliability of WPD indices is rarely investigated and reported in empirical studies. Unreliability in these indices can bias parameter estimates and yield erroneous conclusions. We propose an approach to (a) estimate the reliability and (b) correct for sampling error of WPD indices using "Level-1 variance-known" (V-known) multilevel models (Raudenbush & Bryk, 2002). When WPD indices are calculated for each individual, the sampling variance of the observed WPD scores is typically falsely assumed to be zero. V-known models replace this "zero" with an approximate sampling variance fixed at Level 1 to estimate the true variance of the index at Level 2, following random effects meta-analysis principles. We demonstrate how V-known models can be applied to a broad range of emotion dynamics commonly derived from ILD, including indices of the average level (mean), variability (intraindividual standard deviation), instability (probability of acute change), bipolarity (correlation), differentiation (intraclass correlation), inertia (autocorrelation), and relative variability (relative standard deviation) of emotions. A simulation study shows the usefulness of V-known models to recover the true reliability of these indices. Using a 21-day diary study, we illustrate the implementation of the proposed approach to obtain reliability estimates and to correct for unreliability of WPD indices in real data. The techniques may facilitate psychometrically sound inferences from WPD indices in this burgeoning research area.
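The V-known idea, that the observed between-person variance of a dynamics index mixes true variance with known level-1 sampling variance, can be sketched with a method-of-moments calculation. This is a simplification of the multilevel estimation in the paper, and the data are invented:

```python
def vknown_reliability(scores, sampling_vars):
    """Method-of-moments sketch of the V-known logic: subtract the
    average known sampling variance from the observed between-person
    variance of a within-person dynamics (WPD) index, then report
    reliability = true variance / observed variance."""
    k = len(scores)
    mean = sum(scores) / k
    observed_var = sum((s - mean) ** 2 for s in scores) / (k - 1)
    mean_sampling_var = sum(sampling_vars) / k
    true_var = max(observed_var - mean_sampling_var, 0.0)
    return true_var / observed_var

# hypothetical WPD index (e.g., each person's intraindividual SD of
# daily affect) with an approximate sampling variance per score
scores = [0.8, 1.2, 0.5, 1.0, 1.5, 0.7]
svars = [0.04, 0.05, 0.03, 0.04, 0.06, 0.03]
rel = vknown_reliability(scores, svars)
```

The V-known multilevel model performs this decomposition with the sampling variance fixed per person at Level 1, rather than averaged as in this sketch.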


Subjects
Emotions, Humans, Selection Bias, Reproducibility of Results, Computer Simulation, Multilevel Analysis
11.
Innov Aging ; 6(3): igac027, 2022.
Article in English | MEDLINE | ID: mdl-35663275

ABSTRACT

Background and Objectives: It is widely recognized that survey satisficing, inattentive, or careless responding in questionnaires reduces the quality of self-report data. In this study, we propose that such low-quality responding (LQR) can carry substantive meaning at older ages. Completing questionnaires is a cognitively demanding task and LQR among older adults may reflect early signals of cognitive deficits and pathological aging. We hypothesized that older people displaying greater LQR would show faster cognitive decline and greater mortality risk. Research Design and Methods: We analyzed data from 9,288 adults aged 65 years or older in the Health and Retirement Study. Indicators of LQR were derived from participants' response patterns in 102 psychosocial questionnaire items administered in 2006-2008. Latent growth models examined whether LQR predicted initial status and change in cognitive functioning, assessed with the modified Telephone Interview for Cognitive Status, over the subsequent 10 years. Discrete-time survival models examined whether LQR was associated with mortality risk over the 10 years. We also examined evidence for indirect (mediated) effects in which LQR predicts mortality via cognitive trajectories. Results: After adjusting for age, gender, race, marital status, education, health conditions, smoking status, physical activity, and depressive symptoms, greater LQR was cross-sectionally associated with poorer cognitive functioning, and prospectively associated with faster cognitive decline over the follow-up period. Furthermore, greater LQR was associated with increased mortality risk during follow-up, and this effect was partially accounted for by the associations between LQR and cognitive functioning. Discussion and Implications: Self-report questionnaires are not formally designed as cognitive tasks, but this study shows that LQR indicators derived from self-report measures provide objective, performance-based information about individuals' cognitive functioning and survival. Self-report surveys are ubiquitous in social science, and indicators of LQR may be of broad relevance as predictors of cognitive and health trajectories in older people.

12.
Appl Res Qual Life ; 17(1): 317-331, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35330704

ABSTRACT

Comparison standards that people use when responding to survey questions, also called Frames of Reference (FoRs), can influence the validity of self-report responses. The effects of FoRs might be stronger for items using vague quantifier (VQ) scales, which are particularly prominent in quality-of-life research, than for numeric responses. This study aims to investigate the impact of FoRs on self-report measures by examining how imposing a specific FoR in survey questions affects (a) the response levels of VQ and numeric scales and (b) the relationship between VQs and quantitative responses to the same question. A sample of 1,869 respondents rated their education, commute and sleep duration, medication use, and level of physical activity using both VQ and numeric formats. Participants were asked to compare themselves with the average US adult, with their friends who are about their age, or did not receive specific instructions regarding a reference for comparison. We found that FoR conditions did not influence the numeric responses. Among the VQ responses, only educational attainment was affected by FoR. The association between the numeric responses and vague quantifiers was comparable across different FoR conditions. Our results showed that manipulating the use of interpersonal FoRs had limited effect on the responses, which suggests that at least some comparisons do not have a strong biasing effect on self-report measures. However, future research should confirm this finding using other FoRs (e.g., historical or hypothetical comparisons) and other outcome measures.

13.
Pain ; 163(1): 170-179, 2022 Jan 01.
Article in English | MEDLINE | ID: mdl-33974578

ABSTRACT

ABSTRACT: Despite tremendous efforts to increase the reliability of pain measures and other self-report instruments, improving or even evaluating the reliability of change scores has been largely neglected. In this study, we investigate the ability of 2 instruments from the Patient-Reported Outcomes Measurement Information System, pain interference (6 items) and pain behavior (7 items), to reliably detect individual changes in pain during the postsurgical period of a hernia repair in 98 patients who answered daily diaries over almost 3 weeks after surgery. To identify the most efficient strategy for obtaining sufficiently reliable estimates of change (reliability >0.9), the number of measurement occasions over the study period (sampling density), the number of items (test length), and the mode of administration (ie, static short form vs computerized adaptive testing) were manipulated in post-hoc simulations. Reliabilities for different strategies were estimated by comparing the observed change with the best approximation of "real" (ie, latent) change. We found (1) that near perfect reliability can be achieved if measures from all days over the whole study period, obtained with all pain interference or pain behavior items, were used to estimate the observed change, (2) that various combinations of the number of items and the number of measurement occasions could achieve acceptable reliability, and (3) that computerized adaptive testing was superior to short forms in achieving sufficient reliability. We conclude that the specific strategy for assessing individual postoperative change in pain experience must be selected carefully.


Subjects
Herniorrhaphy, Pain, Humans, Psychometrics, Reproducibility of Results, Self Report
14.
J Intell ; 11(1)2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36662133

ABSTRACT

Monitoring of cognitive abilities in large-scale survey research is receiving increasing attention. Conventional cognitive testing, however, is often impractical on a population level, highlighting the need for alternative means of cognitive assessment. We evaluated whether response times (RTs) to online survey items could be useful to infer cognitive abilities. We analyzed >5 million survey item RTs from >6000 individuals administered over 6.5 years in an internet panel together with cognitive tests (numerical reasoning, verbal reasoning, task switching/inhibitory control). We derived measures of mean RT and intraindividual RT variability from a multilevel location-scale model as well as an expanded version that separated intraindividual RT variability into systematic RT adjustments (variation of RTs with item time intensities) and residual intraindividual RT variability (residual error in RTs). RT measures from the location-scale model showed weak associations with cognitive test scores. However, RT measures from the expanded model explained 22-26% of the variance in cognitive scores and had prospective associations with cognitive assessments over lag-periods of at least 6.5 years (mean RTs), 4.5 years (systematic RT adjustments), and 1 year (residual RT variability). Our findings suggest that RTs in online surveys may be useful for gaining information about cognitive abilities in large-scale survey research.

15.
Alzheimers Dement (Amst) ; 13(1): e12252, 2021.
Article in English | MEDLINE | ID: mdl-34934800

ABSTRACT

INTRODUCTION: We investigate whether indices of subtle reporting mistakes derived from responses in self-report surveys are associated with dementia risk. METHODS: We examined 13,831 participants without dementia from the prospective, population-based Health and Retirement Study (mean age 69 ± 10 years, 59% women). Participants' response patterns in 21 questionnaires were analyzed to identify implausible responses (multivariate outliers), incompatible responses (Guttman errors), acquiescent responses, random errors, and the proportion of skipped questions. Subsequent incident dementia was determined over up to 10 years of follow-up. RESULTS: During follow-up, 2074 participants developed dementia and 3717 died. Each of the survey response indices was associated with future dementia risk controlling for confounders and accounting for death as a competing risk. Stronger associations were evident for participants who were younger and cognitively normal at baseline. DISCUSSION: Mistakes in the completion of self-report surveys in longitudinal studies may be early indicators of dementia among middle-aged and older adults.
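One of the indices, Guttman errors, can be computed directly from a dichotomous response pattern given an easy-to-hard item ordering. The ordering and the pattern below are hypothetical, for illustration only:

```python
def guttman_errors(responses, difficulty_order):
    """Count Guttman errors in a 0/1 response pattern: pairs where a
    harder item is endorsed while an easier one is not, violating the
    expected cumulative (Guttman) structure."""
    ordered = [responses[i] for i in difficulty_order]  # easy -> hard
    errors = 0
    for i in range(len(ordered)):
        for j in range(i + 1, len(ordered)):
            if ordered[i] == 0 and ordered[j] == 1:
                errors += 1
    return errors

# hypothetical 5-item pattern that endorses harder items while
# missing easier ones, producing incompatible (Guttman-error) pairs
e = guttman_errors([1, 0, 1, 0, 1], [0, 1, 2, 3, 4])
```

A perfectly cumulative pattern yields zero errors; elevated counts across questionnaires are the kind of signal the abstract relates to dementia risk.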

16.
Psychol Aging ; 36(6): 679-693, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34516172

ABSTRACT

Emotions and symptoms are often overestimated in retrospective ratings, a phenomenon referred to as the "memory-experience gap." Some research has shown that this gap is less pronounced among older compared to younger adults for self-reported negative affect, but it is not known whether these age differences are evident consistently across domains of well-being and why these age differences emerge. In this study, we examined age differences in the memory-experience gap for emotional (positive and negative affect), social (loneliness), and physical (pain, fatigue) well-being. We also tested four variables that could plausibly explain age differences in the gap: (a) episodic memory and executive functioning, (b) the age-related positivity effect, (c) variability of daily experiences, and (d) socially desirable responding. Adults (n = 477) from three age groups (21-44, 45-64, 65+ years old) participated in a 21-day diary study. Participants completed daily end-of-day ratings and retrospective ratings of the same constructs over different recall periods (3, 7, 14, and 21 days). Results showed that, relative to young and middle-aged adults, older adults had a smaller memory-experience gap for negative affect and loneliness. Lower day-to-day variability partly explained why the gap was smaller for older adults. There was no evidence that the magnitude of the memory-experience gap for positive affect, pain or fatigue depended on age. We recommend that future research considers how variability in daily experiences can impact age differences in retrospective self-reports of well-being. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subjects
Aging/psychology, Episodic Memory, Mental Recall, Adult, Affect, Aged, Fatigue, Female, Humans, Loneliness, Male, Middle Aged, Pain, Retrospective Studies, Young Adult
17.
JMIR Form Res ; 5(5): e28007, 2021 May 26.
Article in English | MEDLINE | ID: mdl-34037524

ABSTRACT

BACKGROUND: Ecological momentary assessment (EMA) has the potential to minimize recall bias by having people report on their experiences in the moment (momentary model) or over short periods (coverage model). This potential hinges on the assumption that participants base their ratings on the reporting time frame prescribed in the EMA items. However, it is unclear what time frames participants actually use when answering EMA questions and whether participant training improves adherence to the reporting instructions. OBJECTIVE: This study aims to investigate the reporting time frames participants used when answering EMA questions and whether participant training improves adherence to the EMA reporting time frame instructions. METHODS: Telephone-based cognitive interviews were used to investigate the research questions. In a 2×2 factorial design, participants (n=100) were assigned to receive either basic or enhanced EMA training and randomized to rate their experiences using a momentary (at the moment you were called) or a coverage (since the last phone call) model. Participants received five calls over the course of a day to provide ratings; after each rating, participants were immediately interviewed about the time frame they used to answer the EMA questions. A total of 2 raters independently coded the momentary interview responses into time frame categories (Cohen κ=0.64, 95% CI 0.55-0.73). RESULTS: In the momentary conditions, participants most often referred to the period during the call (57/199, 28.6%) or just before the call (98/199, 49.2%) when providing ratings; the remainder drew on longer reporting periods. Multinomial logistic regression indicated a significant training effect (χ²(1)=16.6; P<.001): the enhanced training condition yielded more reports within the intended reporting time frames for momentary EMA reports. Cognitive interview data from the coverage model did not lend themselves to reliable coding and were not analyzed. CONCLUSIONS: This study provides the first evidence on participants' adherence to EMA reporting-period instructions and shows that enhanced participant training improves adherence to the time frame specified in momentary EMA studies.
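The inter-rater reliability reported above (Cohen κ=0.64) corrects raw rater agreement for the agreement expected by chance. A minimal sketch of the statistic for two raters' categorical codes (illustrative only; the study likely used a statistics package):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assigned a categorical code to the same items."""
    n = len(codes_a)
    # observed agreement: proportion of items coded identically
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # chance agreement: both raters independently picking the same category
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)
```

Kappa is 1 under perfect agreement and 0 when agreement is no better than chance, which is why a κ of 0.64 is conventionally read as substantial rather than near-perfect agreement.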

18.
Front Pain Res (Lausanne) ; 2: 692567, 2021.
Article in English | MEDLINE | ID: mdl-35295477

ABSTRACT

Introduction: Effective clinical care for chronic pain requires accurate, comprehensive, meaningful pain assessment. This study investigated healthcare providers' perspectives on seven pain measurement indices for capturing pain intensity. Methods: Semi-structured telephone interviews were conducted with a purposeful sample from four US regions of 20 healthcare providers who treat patients with chronic pain. The qualitative interview guide included open-ended questions to address perspectives on pain measurement, and included quantitative ratings of the importance of seven indices [average pain, worst pain, least pain, time in no/low pain, time in high pain, fluctuating pain, unpredictable pain]. Qualitative interview data were read, coded, and analyzed for themes and final interpretation. Standard quantitative methods were used to analyze index importance ratings. Results: Despite concerns regarding 10-point visual analog and numeric rating scales, almost all providers used them. Providers most commonly asked about average pain, although they expressed misgivings about patient reporting and the index's informational value. Some supplemented average with worst and least pain, and most believed pain intensity is best understood within the context of patient functioning. Worst pain received the highest mean importance rating (7.60), average pain the second lowest rating (5.65), and unpredictable pain the lowest rating (5.20). Discussion: Assessing average pain intensity alone precludes obtaining clinical insight into daily contextual factors relating to pain and functioning. Pain index use, together with timing, functionality, and disability, may be most effective for understanding the meaning to patients of high pain, how pain affects their life, how life affects their pain, and how pain changes and responds to treatment.

20.
J Health Psychol ; 26(13): 2577-2591, 2021 11.
Article in English | MEDLINE | ID: mdl-32419503

ABSTRACT

This feasibility study employed a new approach to capturing pain disclosure in face-to-face and online interactions, using a newly developed tool. In Study 1, 13 rheumatoid arthritis and 52 breast cancer patients wore the Electronically Activated Recorder to acoustically sample participants' natural conversations. Study 2 obtained data from two publicly available online social networks: fibromyalgia (343,439 posts) and rheumatoid arthritis (12,430 posts). Pain disclosure, versus non-pain disclosure, posts had a greater number of replies, and greater engagement indexed by language style matching. These studies yielded novel, multimethod evidence of how pain disclosure unfolds in naturally occurring social contexts in everyday life.
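Language style matching, used above as an engagement index, is commonly computed by comparing two speakers' usage rates of function-word categories (pronouns, articles, and so on). A minimal sketch under that assumption, with percentage usage rates per category as inputs (the study's exact category set and software are not specified in the abstract):

```python
def lsm_score(rates_a, rates_b):
    """Language style matching between two texts.

    rates_a, rates_b: dicts mapping a function-word category
    (e.g., 'pronoun', 'article') to the percentage of words in that
    category. Per-category similarity is 1 minus the normalized
    absolute difference; the overall score averages across categories.
    """
    scores = []
    for cat in rates_a:
        a, b = rates_a[cat], rates_b[cat]
        # 1.0 when usage rates match exactly; the small constant
        # guards against division by zero when both rates are 0
        scores.append(1 - abs(a - b) / (a + b + 0.0001))
    return sum(scores) / len(scores)
```

Scores near 1 indicate closely matched styles (higher engagement on this index); scores near 0 indicate little overlap in function-word usage.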


Subjects
Breast Neoplasms, Disclosure, Communication, Female, Humans, Language, Pain